Statement Distribution Per Peer Rate Limit #3444
Conversation
```rust
	return
// If peer currently being served drop request
if active_peers.contains(&request.peer) {
	continue
```
Shouldn't this be handled in the context of a candidate/relay parent? Otherwise we would drop legitimate requests.
This is definitely a fair point. Rob mentioned it https://github.com/paritytech-secops/srlabs_findings/issues/303#issuecomment-1588185410 as well.

It's a double-edged sword. If we make it 1 per context then the request/response protocol is more efficient, but with async backing enabled (which to my understanding broadens the scope of possible contexts) the mpsc channel (size 25-ish) can still be easily overwhelmed. The problem of dropped requests gets somewhat better after the recent commit [Rate limit sending from the requester side](https://github.com/paritytech/polkadot-sdk/pull/3444/commits/58d828db45b7c023e8a8266d3180be4361723519), which also introduces the rate limit on the requester side so effort is not wasted. If we are already requesting from a peer and want to request something else as well, we simply wait for the first one to finish.

What could be argued is making the limit a small constant (1-3) instead of 1 per PeerID. I would like to do some testing and more reviews, and changing it to a configurable constant instead of a hard limit of 1 is actually a trivial change. (I don't think it's necessary. Better to have a single completely sent candidate than two half-sent candidates.)

Truth be told, this whole solution is far from ideal. It's a fix, but the final solution is to manage DoS at a significantly lower level than in parachain consensus. It will do for now, but DoS needs to be looked at in more detail in the near future.
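For illustration, here is a minimal sketch of the responder-side gate being discussed. The `active_peers` check mirrors the diff at the top of the thread; the `PeerId` and `IncomingRequest` types are simplified stand-ins, not the real subsystem types:

```rust
use std::collections::HashSet;

// Simplified stand-ins for the real network types.
#[derive(Clone, PartialEq, Eq, Hash)]
struct PeerId(u64);
struct IncomingRequest {
	peer: PeerId,
}

/// Serve at most one attested-candidate request per peer at a time.
/// Returns `false` (drop the request) if the peer is already being served.
fn try_begin_serving(active_peers: &mut HashSet<PeerId>, request: &IncomingRequest) -> bool {
	if active_peers.contains(&request.peer) {
		// Peer currently being served: drop this request.
		return false
	}
	// Mark the peer as active; the caller removes it again once the
	// response has been fully sent.
	active_peers.insert(request.peer.clone());
	true
}
```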
> This is definitely a fair point. Rob mentioned it paritytech-secops/srlabs_findings#303 (comment) as well.
> It's a double-edged sword. If we make it 1 per context then the request/response protocol is more efficient, but with async backing enabled (which to my understanding broadens the scope of possible contexts) the mpsc channel (size 25-ish) can still be easily overwhelmed. The problem of dropped requests gets somewhat better after the recent commit [Rate limit sending from the requester side](https://github.com/paritytech/polkadot-sdk/pull/3444/commits/58d828db45b7c023e8a8266d3180be4361723519), which also introduces the rate limit on the requester side so effort is not wasted. If we are already requesting from a peer and want to request something else as well, we simply wait for the first one to finish.

I would rather put the constraint on the responder side rather than the requester. The reasoning is to avoid any bugs in other clients' implementations due to the subtle requirement to wait for the first request to finish.

> What could be argued is making the limit a small constant (1-3) instead of 1 per PeerID. I would like to do some testing and more reviews, and changing it to a configurable constant instead of a hard limit of 1 is actually a trivial change. (I don't think it's necessary. Better to have a single completely sent candidate than two half-sent candidates.)

I totally agree with DoS protection as low as possible in the stack, but this approach still seems sensible. Maybe instead of relying on a hardcoded constant we can derive the value based on the async backing parameters.
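If one did want to derive it, a rough sketch could look like the following. The field names mirror the shape of `AsyncBackingParams`, but the formula itself is entirely made up for illustration:

```rust
/// Mirrors the shape of the real `AsyncBackingParams`.
struct AsyncBackingParams {
	max_candidate_depth: u32,
	allowed_ancestry_len: u32,
}

/// Hypothetical derivation: keep the per-peer limit a small constant, but
/// let it grow slightly when async backing widens the set of possible
/// contexts. The clamp keeps it in the 1-3 range discussed above.
fn per_peer_request_limit(params: &AsyncBackingParams) -> u32 {
	((params.max_candidate_depth + params.allowed_ancestry_len) / 4).clamp(1, 3)
}
```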
> I would rather put the constraint on the responder side rather than the requester.

I agree that the constraint on the responder side is more important, but having it in both places seems even better.

> The reasoning is to avoid any bugs in other clients' implementations due to the subtle requirement to wait for the first request to finish.

Even without this PR the risk seems exactly the same. We were already checking the number of parallel requests being handled in the requester, so even if there were bugs in other clients they could surface there with or without this PR.

The only really bad scenario would be if for some reason we kept a PeerID marked as still being active when it really isn't, effectively blacklisting that peer. If that happened, it would mean the future connected to that PeerID is still in `pending_responses`, which would eventually brick the requester anyway since we cannot go over the max limit of `MAX_PARALLEL_ATTESTED_CANDIDATE_REQUESTS`. Both situations are just as fatal, but the new system at least has extra protections against DoS attempts, so it's a security gain.

By adding the constraint on the requester side we limit the wasted effort and can potentially safely add reputation updates if we get those unsolicited requests, which protects us from DoS even further. (Some honest requests might slip through, so the rep update needs to be small.)

> Maybe instead of relying on a hardcoded constant we can derive the value based on the async backing parameters.

We potentially could, but I also don't see why this value would need to change a lot. Even if we change some async backing params we should be fine. It seems to be more sensitive to the channel size, as we want to make it hard for a malicious node to dominate that queue, and to `MAX_PARALLEL_ATTESTED_CANDIDATE_REQUESTS`.
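To make the requester-side behaviour concrete, here is a sketch under the same simplified types as above. `MAX_PARALLEL_ATTESTED_CANDIDATE_REQUESTS` is the real constant's name, but its value here is illustrative:

```rust
use std::collections::{HashSet, VecDeque};

// Illustrative value; the real constant lives in the subsystem.
const MAX_PARALLEL_ATTESTED_CANDIDATE_REQUESTS: usize = 5;

#[derive(Clone, PartialEq, Eq, Hash)]
struct PeerId(u64);
struct PendingRequest {
	peer: PeerId,
}

/// Pick the next request to dispatch: skip peers that already have a
/// request in flight (they stay queued rather than being dropped, so no
/// effort is wasted) and respect the global parallelism cap.
fn next_request(
	queue: &mut VecDeque<PendingRequest>,
	in_flight: &mut HashSet<PeerId>,
) -> Option<PendingRequest> {
	if in_flight.len() >= MAX_PARALLEL_ATTESTED_CANDIDATE_REQUESTS {
		return None
	}
	let pos = queue.iter().position(|r| !in_flight.contains(&r.peer))?;
	let request = queue.remove(pos)?;
	in_flight.insert(request.peer.clone());
	Some(request)
}
```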
You need not choose a uniform distribution over recent relay parents either. Instead, allow the full value on the first couple, then 1/2 and 1/4 for a few, and then 1 for a while. You need to figure out if the selected distribution breaks collators, but it might work.
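A sketch of that idea; the cutoffs and the full budget are made-up numbers, only the shape (full value for the most recent relay parents, then 1/2, 1/4, then 1) follows the suggestion:

```rust
/// Hypothetical decaying per-relay-parent limit.
fn limit_for_relay_parent_age(age: usize, full: usize) -> usize {
	match age {
		0..=1 => full,                // full value on the first couple
		2..=3 => (full / 2).max(1),   // then 1/2 for a few
		4..=5 => (full / 4).max(1),   // then 1/4
		_ => 1,                       // then 1 for a while
	}
}

fn main() {
	for age in 0..8 {
		println!("relay parent age {age}: limit {}", limit_for_relay_parent_age(age, 8));
	}
}
```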
Added some comments. What is not clear to me is why `can_request` from `answer_request` is not considered fast enough; that already has the benefit that we punish peers that are misbehaving.

Just my 2 cents: if what we want is to prevent peers from exhausting the budget for the number of connections, I would think the proper fix needs to go deeper into the network layer, rather than here, since a lot of time is spent before the message reaches this point.
re: @alexggh

Technically a fair point, but I think there's a reason. It's not about speed; even before this change we tracked the parallel requests up to the limit.

I 100% agree with you here. I said this myself, and Robert and I have already spoken about it, and that approach will be pursued. That's why this solution is not ideal, but it at least makes the attack vector much harder to pull off with minimal changes to the current code-base. Not ideal, but worth the tiny effort before we tackle the wider, more general issue at the lower level.
```rust
msg: NetworkBridgeTxMessage::SendRequests(mut requests, if_disconnected),
} => {
	// AttestedCandidateV2 requests arrive 1 by 1
	if requests.len() == 1 {
```
This might be true for now, but it can easily be broken in the future. Why not loop through all incoming requests?
Handled multiple requests 👍
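The generalised shape could look roughly like this: iterate over every request in the batch instead of special-casing `requests.len() == 1`, and apply the rate limit only to `AttestedCandidateV2` requests. The enum variants and the `rate_limited` check are placeholders:

```rust
// Simplified stand-in for the real request enum.
enum Requests {
	AttestedCandidateV2(u32),
	Other(u32),
}

// Placeholder for the per-peer rate-limit check.
fn rate_limited(_request: &Requests) -> bool {
	false
}

/// Handle every request in the batch, not just a single one.
fn filter_requests(requests: Vec<Requests>) -> Vec<Requests> {
	let mut new_requests = Vec::with_capacity(requests.len());
	for request in requests {
		let drop_it =
			matches!(request, Requests::AttestedCandidateV2(_)) && rate_limited(&request);
		if drop_it {
			// Dropped (or deferred) by the per-peer rate limit.
			continue
		}
		new_requests.push(request);
	}
	new_requests
}
```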
```rust
for _ in 0..self.spam_factor - 1 {
	let (new_outgoing_request, _) = OutgoingRequest::new(
		peer_to_duplicate.clone(),
		payload_to_duplicate.clone(),
	);
	let new_request = Requests::AttestedCandidateV2(new_outgoing_request);
	requests.push(new_request);
}
```
It should work without needing the `peer_to_duplicate` and `payload_to_duplicate` clones:
```rust
requests.extend((0..self.spam_factor - 1).map(|_| {
	let (new_outgoing_request, _) = OutgoingRequest::new(
		peer_to_duplicate.clone(),
		payload_to_duplicate.clone(),
	);
	Requests::AttestedCandidateV2(new_outgoing_request)
}));
```
Borrowed the extend syntax but still need the clones to satisfy the borrow checker.
Nice work!
```rust
	}
	_ => {
		new_requests.push(request);
```
We can dedup this line by moving it above the match.
This one is somewhat of a conscious trade-off, but if you see a graceful way of doing it feel free to suggest it.
The problem is that `Request` does not implement Copy/Clone. I could possibly track the index where I push it, but that would make it even less readable imo.
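For what it's worth, the index-tracking variant mentioned here could look roughly like this (purely hypothetical, and arguably less readable, as noted):

```rust
// Stand-in for the real `Requests` type, which does not implement Clone.
struct Request(u32);

fn is_rate_limited(_request: &Request) -> bool {
	false
}

/// Push first, remember the index, and remove again only in the
/// rate-limited branch. This dedups the `push` without needing
/// `Request: Clone`, at some cost in readability.
fn push_unless_limited(new_requests: &mut Vec<Request>, request: Request) {
	new_requests.push(request);
	let idx = new_requests.len() - 1;
	if is_rate_limited(&new_requests[idx]) {
		new_requests.remove(idx);
	}
}
```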
Good job!
- [x] Drop requests from a PeerID that is already being served by us.
- [x] Don't send requests to a PeerID if we are already requesting something from them at that moment (prioritise other requests or wait).
- [x] Tests
- [ ] ~~Add a small rep update for unsolicited requests (same peer request)~~ not included in the original PR due to potential issues with nodes slowly updating
- [x] Add a metric to track the amount of dropped requests due to peer rate limiting
- [x] Add a metric to track how many times a node reaches the max parallel requests limit in v2+

Helps with but does not close yet: https://github.com/paritytech-secops/srlabs_findings/issues/303
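For the two metrics items above, a rough sketch of what such a counter could look like using the plain `prometheus` crate; the metric name is made up, and the real subsystem registers it through its own metrics module:

```rust
use prometheus::{IntCounter, Registry};

// Hypothetical metric; the real name/help live in the subsystem's metrics module.
fn register_dropped_requests_counter(registry: &Registry) -> prometheus::Result<IntCounter> {
	let dropped = IntCounter::new(
		"statement_distribution_peer_rate_limit_dropped_total",
		"Requests dropped because the peer was already being served",
	)?;
	registry.register(Box::new(dropped.clone()))?;
	Ok(dropped)
}

fn main() -> prometheus::Result<()> {
	let registry = Registry::new();
	let dropped = register_dropped_requests_counter(&registry)?;
	dropped.inc(); // bump whenever the rate limit drops a request
	Ok(())
}
```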